In this Section we introduce a second general paradigm for effective cross-validation - that is, for the effective search for a model with proper capacity. With the first approach discussed in the previous Section - boosting - we took a 'bottom-up' approach to fine-tuning the amount of capacity a model needs: we began with a low capacity model and gradually increased its capacity by adding units (from the same family of universal approximators) until we built up 'just enough' capacity (that is, the amount that minimizes validation error).
Here we introduce the complementary approach - called regularization. Instead of building up capacity 'from the bottom', with regularization we take a 'top-down' view: we start with a very high capacity model (one that would likely overfit, providing low training error but high validation error) and gradually decrease its capacity until it is 'just right' (that is, until validation error is minimized). While in principle any universal approximator can be used with regularization, in practice regularization is often the cross-validation approach of choice when employing kernel and neural network universal approximators.
# This code cell will not be shown in the HTML version of this notebook
# imports from custom library
import sys
sys.path.append('../../')
from mlrefined_libraries import math_optimization_library as optlib
from mlrefined_libraries import nonlinear_superlearn_library as nonlib
from mlrefined_libraries import basics_library
# demos for this notebook
regress_plotter = nonlib.nonlinear_regression_demos_multiple_panels
classif_plotter = nonlib.nonlinear_classification_visualizer_multiple_panels
static_plotter = optlib.static_plotter.Visualizer()
basic_runner = nonlib.basic_runner
classif_plotter_crossval = nonlib.crossval_classification_visualizer
datapath = '../../mlrefined_datasets/nonlinear_superlearn_datasets/'
# import autograd functionality to build functions properly for optimizers
import autograd.numpy as np
# import timer
from datetime import datetime
import copy
import math
import pickle
# this is needed to compensate for %matplotlib notebook's tendency to blow up images when plotted inline
%matplotlib notebook
from matplotlib import rcParams
rcParams['figure.autolayout'] = True
%load_ext autoreload
%autoreload 2
Imagine for a moment that we have a simple nonlinear regression dataset, like the one shown in the left panel of the Figure below, and we use a single model - made up of a sum of universal approximators of a given type - with far too much capacity to fit this data properly. In other words, we train our high capacity model on a training portion of this data via minimization of an appropriate cost function like the Least Squares cost. In the left panel we also show, in red, a corresponding fit provided by this model, which wildly overfits the data.
In a high capacity model like this one we have clearly used too many and/or too flexible universal approximators (feature transformations). But equally important to diagnosing the problem of overfitting is how well we tune the model's parameters or, in other words, how well we minimize its corresponding cost function. In the present case, for example, the parameter settings of the model in the middle panel that overfit our training data come from near the minimum of the model's cost function. This cost function is drawn figuratively in the right panel, where the minimum is shown as a red point. This holds for high capacity models in general: regardless of the kind and number of feature transformations used, a model will overfit a training set only when we tune its parameters well or, in other words, when we minimize its corresponding cost function well. Conversely, even a high capacity model will not overfit its training data if we do not tune its parameters well.
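This overfitting-optimization connection can be seen numerically. The sketch below (a synthetic noisy sine dataset and a degree-7 polynomial, both illustrative assumptions, not data used elsewhere in this notebook) fully minimizes the Least Squares cost via np.polyfit, driving training error far below that of the same high capacity model with untuned parameters:

```python
import numpy as np

# synthetic noisy sine dataset (illustrative assumption)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(12)

# fully minimizing the Least Squares cost: np.polyfit solves it exactly
w_full = np.polyfit(x, y, deg=7)
train_error_full = np.mean((np.polyval(w_full, x) - y) ** 2)

# the same high capacity model with parameters left untuned at zero
train_error_untuned = np.mean(y ** 2)

print(train_error_full, train_error_untuned)  # tuned error is far smaller
```

The high capacity model overfits only because its cost was minimized well; with poorly tuned (here, zero) parameters the very same model neither fits nor overfits the training data.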
The general set of regularization procedures for cross-validation can be thought of as a product of this insight. With regularization we perform cross-validation by fine-tuning the nonlinearity of a high capacity model, setting its parameters purposefully away from the global minima of its associated cost function. As we will see, this can be done in a variety of ways, but in general the search for an appropriate set of parameters away from a cost's global minima (parameters that provide low validation error for the model) is performed sequentially. Broadly speaking there are two basic categories of regularization strategies, both of which we discuss here: early stopping, and the addition of a simple capacity-blunting function to the cost.
Using the dial visualization of cross-validation introduced in Section 11.2.2 we can think of regularization procedures as starting with the dial set all the way to the right (at a model with extremely high capacity). We then sequentially move our model's parameters away from the global minima of its associated cost function; in doing so we turn the dial very gradually counter-clockwise from right to left, decreasing the capacity of the model in search of one with low validation error.
As with boosting, with regularization we want to search as carefully as possible, turning our cross-validation dial counter-clockwise from right to left as smoothly as computational resources allow.
<< SMOOTH VERSUS JERKY DIAL TURNING >>
Regularization techniques for capacity tuning leverage precisely this overfitting-optimization connection, and in general work by preventing the complete minimization of a cost function associated with a high capacity model of the standard form
\begin{equation} \text{model}\left(\mathbf{x},\Theta\right) = w_0 + f_1\left(\mathbf{x}\right){w}_{1} + f_2\left(\mathbf{x}\right){w}_{2} + \cdots + f_B\left(\mathbf{x}\right)w_B. \end{equation}

Whether we use kernel, neural network, or tree-based units, to produce a model with high capacity we use a large number of units and/or units with high individual capacity. Families of each universal approximator type consisting of high capacity units - e.g., deep neural networks and deep trees - are described in detail in the three Chapters following this one.
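As a minimal sketch of this generic model, the code below uses polynomial units $f_b(x) = x^b$ as the feature transformations (an illustrative assumption; any family of universal approximators could play this role):

```python
import numpy as np

def model(x, w):
    # w = [w_0, w_1, ..., w_B]; returns w_0 + sum_b f_b(x) w_b,
    # with f_b(x) = x**b as illustrative polynomial units
    B = len(w) - 1
    features = np.stack([x ** b for b in range(1, B + 1)])  # shape (B, N)
    return w[0] + np.dot(w[1:], features)

x = np.array([0.0, 0.5, 1.0])
w = np.array([1.0, 2.0, -1.0])   # model(x) = 1 + 2x - x^2
print(model(x, w))               # → [1.   1.75 2.  ]
```

Tuning such a model's capacity via regularization means adjusting how the weights $w_0, \ldots, w_B$ are set relative to the cost's global minima, not changing the number or form of the units themselves.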
The overfitting problem is inherently tied to completely minimizing the cost function of a model with too much capacity for a given dataset. The early stopping idea offers a simple solution: do not minimize this cost function too well - in particular, halt the optimization when validation error is at its lowest. Doing so prevents local optimization schemes from reaching points too close to the cost function's global minima, where the weights produced cause overfitting. This is a strongly optimization-centric view of overfitting prevention - one common to most regularization techniques - since we are acting deliberately to avoid the global minima of a cost function.
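The early stopping procedure can be sketched as follows, assuming a synthetic dataset, degree-9 polynomial features, and plain gradient descent on a Least Squares cost (all illustrative choices, not the library calls used elsewhere in this notebook):

```python
import numpy as np

# synthetic noisy sine dataset (illustrative assumption)
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(30)

# random training / validation split
perm = rng.permutation(30)
train, val = perm[:20], perm[20:]

F = np.vander(x, 10, increasing=True)   # degree-9 polynomial features

def mse(w, idx):
    return np.mean((F[idx] @ w - y[idx]) ** 2)

# gradient descent on the training cost, keeping the weights with the
# lowest validation error seen so far - the early-stopping point
w = np.zeros(10)
best_w, best_val = w.copy(), mse(w, val)
for k in range(3000):
    grad = 2 * F[train].T @ (F[train] @ w - y[train]) / len(train)
    w = w - 0.1 * grad
    v = mse(w, val)
    if v < best_val:
        best_val, best_w = v, w.copy()

print(best_val)   # validation error at the early-stopping point
```

Note that `best_w` is deliberately not a minimizer of the training cost: it is whatever intermediate point of the optimization run gave the lowest validation error.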
With early stopping we halt optimization before reaching a cost function's global minima; with other regularization techniques we instead change the location of those global minima, so that even if we fully minimize the adjusted cost we do not recover a set of weights that produces overfitting. This is typically done by adding a simple function - called a regularizer - to the original cost. Doing so changes the shape of the cost, and in particular the location of its global minima. Since the global minima of the adjusted cost no longer align with those of the original, the adjusted cost can be completely minimized with less fear of overfitting the training data.
This method of regularization is illustrated figuratively in the animation below. In the left panel we show a prototypical single-input cost function $g(w)$, in the middle panel a simple function we will add to it (here the quadratic $w^2$), and in the right panel their linear combination $g(w) + \lambda w^2$. The original cost's single global minimum is marked by a red dot (its evaluation) with the corresponding input $w$ marked by a red 'x'. As we increase $\lambda > 0$ (moving the slider from left to right), more and more of the quadratic is added to the original cost, and the minimum of the combination - marked by a green dot, with the corresponding input marked by a green 'x' - moves away from the original.
# This code cell will not be shown in the HTML version of this notebook
# which function should we play with? defined in the next line
# g1 = lambda w: (w - 0.5)**2          # an alternative cost to try
g1 = lambda w: np.sin(3*w) + 0.1*w**2 - 0.7
g2 = lambda w: w**2
# create an instance of the visualizer with this function
test = basics_library.convex_function_addition_2d_min_highlight.visualizer()
# plot away
test.draw_it(g1 = g1,g2 = g2,num_frames = 100,min_range = -4,max_range = 8, alpha_range=(0,0.2),title1 = '$g(w)$',title2 = '$w^2$', title3='$g(w) + \\lambda w^2$')
A complete minimization of the cost function plus the simple addition will - for any value of $\lambda > 0$ - not reach the global minimum of the original cost, and overfitting is prevented provided $\lambda$ is set large enough. On the other hand, $\lambda$ must not be set too large, or the original cost function is completely drowned out by the simple function added to it and we are essentially just minimizing the simple function alone. With $\lambda$ in a proper range the sum still resembles the original cost, and its global minimum lies 'close enough' to the original's that the weights it provides enable a good fit to the corresponding dataset. This general idea is shown figuratively below. By regularizing the original cost function in this way we can find weights that lie away from its global minima which, when used with our high capacity model, still provide a good fit.
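When the cost is Least Squares and the regularizer is the quadratic, the regularized cost can be minimized in closed form, which makes the role of $\lambda$ easy to inspect. The sketch below (using a synthetic dataset and polynomial features, both illustrative assumptions) shows the recovered weights shrinking toward zero as $\lambda$ grows - the extreme regime being the one where the added function drowns out the original cost:

```python
import numpy as np

# synthetic noisy sine dataset (illustrative assumption)
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 25)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(25)
F = np.vander(x, 8, increasing=True)   # degree-7 polynomial features

def ridge_weights(lam):
    # closed-form minimizer of ||F w - y||^2 + lam * ||w||^2
    return np.linalg.solve(F.T @ F + lam * np.eye(8), F.T @ y)

norms = [np.linalg.norm(ridge_weights(lam)) for lam in [0.0, 0.1, 10.0, 1e6]]
print(norms)   # the weight norm shrinks toward zero as lambda grows
```

At $\lambda = 0$ we recover the fully minimized (overfitting-prone) weights; at the absurdly large $\lambda = 10^6$ the weights are nearly zero, essentially minimizing the quadratic alone.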
Before looking at a simple example, note one important item: the simple quadratic regularizer can be written as $\mathbf{w}^T\mathbf{w} = \Vert \mathbf{w} \Vert_2^2$, which expresses it in the form of the (squared) $\ell_2$ norm.
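A quick numeric check of this identity:

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])
print(np.dot(w, w))                                       # w^T w = 14.0
print(np.isclose(np.dot(w, w), np.linalg.norm(w) ** 2))   # equals ||w||_2^2
```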
In this example we use a quadratic regularizer to fit a proper nonlinear regression to the prototypical regression dataset shown in the left panel below. Here the training set is shown in blue and the validation set in yellow. We use a high capacity model (with respect to this data) - a degree $8$ polynomial - trying out $20$ values of $\lambda$ between $0$ and $1$ (completely minimizing the corresponding regularized cost in each instance). As the slider is moved from left to right, the fit provided by the weights recovered from the global minimum of each regularized cost function is shown in red in the left panel, while the corresponding training and validation errors are shown in blue and yellow respectively in the right panel. In this simple experiment, a value somewhere around $\lambda \approx 0.1$ appears to provide the lowest validation error and the best fit to the dataset overall.
# This code cell will not be shown in the HTML version of this notebook
# load in dataset
csvname = datapath + 'noisy_sin_sample.csv'
data = np.loadtxt(csvname,delimiter = ',')
x = data[:-1,:]
y = data[-1:,:]
# start process
num_units = 20
degree = 8
train_portion = 0.66
lambdas = np.linspace(0,1,num_units)   # 20 values of lambda between 0 and 1
runs1 = []
w = 0
for j in range(num_units):
    lam = lambdas[j]

    # initialize with input/output data
    mylib1 = nonlib.reg_lib.super_setup.Setup(x,y)

    # perform preprocessing step(s) - especially input normalization
    mylib1.preprocessing_steps(normalizer = 'none')

    # split into training and validation sets
    if j == 0:
        # make training / validation split
        mylib1.make_train_val_split(train_portion = train_portion)
        train_inds = mylib1.train_inds
        val_inds = mylib1.val_inds
    else:  # use split from first run for all further runs
        mylib1.x_train = mylib1.x[:,train_inds]
        mylib1.y_train = mylib1.y[:,train_inds]
        mylib1.x_val = mylib1.x[:,val_inds]
        mylib1.y_val = mylib1.y[:,val_inds]
        mylib1.train_inds = train_inds
        mylib1.val_inds = val_inds
        mylib1.train_portion = train_portion

    # choose cost
    mylib1.choose_cost(name = 'least_squares')

    # choose polynomial feature transformations
    mylib1.choose_features(feature_name = 'polys',degree = degree)

    # fit via optimization (warm-starting from the previous run's weights)
    if j == 0:
        mylib1.fit(algo = 'newtons_method',max_its = 1,verbose = False,lam = lam)
    else:
        mylib1.fit(algo = 'newtons_method',max_its = 1,verbose = False,lam = lam,w = w)

    # add model to list
    runs1.append(copy.deepcopy(mylib1))
    w = mylib1.w_init
# animate the business
frames = num_units
demo1 = nonlib.regularization_regression_animators.Visualizer(csvname)
demo1.animate_trainval_regularization(runs1,frames,num_units,show_history = True)